8 research outputs found

    Avoiding Geometry Improvement in Derivative-Free Model-Based Methods via Randomization

    We present a technique for model-based derivative-free optimization called \emph{basis sketching}. Basis sketching consists of taking random sketches of the Vandermonde matrix employed in constructing an interpolation model. This randomization weakens the general requirement in model-based derivative-free methods that interpolation sets contain a full-dimensional set of affinely independent points in every iteration. Practically, this weakening provides a theoretically justified means of avoiding potentially expensive geometry-improvement steps in many model-based derivative-free methods. We demonstrate this practicality by extending the nonlinear least-squares solver \texttt{POUNDers} to a variant that employs basis sketching, and we observe encouraging results on higher-dimensional problems.
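    The following is a rough, hypothetical sketch (in Python, using NumPy) of the basis-sketching idea described above: the interpolation system built from a Vandermonde-style matrix is multiplied by a random sketch matrix before the model coefficients are fit. The function and variable names are illustrative only and do not reflect the \texttt{POUNDers} implementation.

# A minimal, hypothetical sketch of the basis-sketching idea: the
# Vandermonde-style interpolation system is multiplied by a random sketch
# matrix, and the model coefficients are fit from the sketched system.
import numpy as np

def linear_vandermonde(points):
    """Rows [1, x_1, ..., x_n] for a linear interpolation model."""
    return np.hstack([np.ones((points.shape[0], 1)), points])

def sketched_model_coeffs(points, fvals, sketch_rows, rng):
    """Fit model coefficients from a randomly sketched interpolation system."""
    V = linear_vandermonde(points)                      # m x (n+1) system matrix
    S = rng.standard_normal((sketch_rows, V.shape[0])) / np.sqrt(sketch_rows)
    # Solve the sketched least-squares problem  min_c || S V c - S f ||_2
    coeffs, *_ = np.linalg.lstsq(S @ V, S @ fvals, rcond=None)
    return coeffs                                       # [intercept, model gradient]

rng = np.random.default_rng(0)
pts = rng.standard_normal((12, 5))          # 12 sample points in R^5
fvals = pts @ np.arange(1.0, 6.0) + 3.0     # an affine test function
print(sketched_model_coeffs(pts, fvals, sketch_rows=8, rng=rng))

    Because the fit uses the sketched system rather than the full interpolation conditions, the sample points need not form a full-dimensional, affinely independent set in every iteration, which is precisely the requirement the abstract describes relaxing.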

    Structure-Aware Methods for Expensive Derivative-Free Nonsmooth Composite Optimization

    We present new methods for solving a broad class of bound-constrained nonsmooth composite minimization problems. These methods are specially designed for objectives that are a known mapping of the outputs of a computationally expensive function. We provide accompanying implementations of these methods: in particular, a novel manifold sampling algorithm (\mspshortref) whose subproblems are, in a sense, primal versions of the dual subproblems solved by previous manifold sampling methods, and a method (\goombahref) that employs more difficult optimization subproblems. For these two methods, we provide rigorous convergence analysis and guarantees, and we test the methods extensively. Open-source implementations of the methods developed in this manuscript can be found at \url{github.com/POptUS/IBCDFO/}.
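    As a minimal illustration of the problem class treated above, the sketch below (Python/NumPy) sets up a bound-constrained composite objective h(F(x)), where F is an expensive, evaluation-counted simulation and h is a known nonsmooth mapping (here an L1 norm). It only illustrates the structure that such methods exploit; it is not the \mspshortref or \goombahref algorithm, and all names are hypothetical.

# A minimal, hypothetical illustration of bound-constrained minimization of
# h(F(x)), where F is computationally expensive and h is a known (possibly
# nonsmooth) mapping. This sets up the problem structure only.
import numpy as np

class ExpensiveF:
    """Wraps an expensive vector-valued simulation and counts evaluations."""
    def __init__(self):
        self.n_evals = 0

    def __call__(self, x):
        self.n_evals += 1
        # Stand-in for an expensive simulation returning two outputs.
        return np.array([np.sin(x[0]) - x[1], x[0] * x[1] - 1.0])

def h(Fx):
    """Known nonsmooth outer mapping, here the L1 norm of the outputs."""
    return np.abs(Fx).sum()

lower, upper = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
F = ExpensiveF()
x = np.clip(np.array([0.5, 0.5]), lower, upper)  # a feasible starting point
print("h(F(x)) =", h(F(x)), "after", F.n_evals, "evaluation of F")

    The point of separating h from F, as the abstract describes, is that a structure-aware method can query the cheap mapping h as often as it likes while spending evaluations of the expensive F sparingly.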

    TROPHY: Trust Region Optimization Using a Precision Hierarchy

    We present an algorithm to perform trust-region-based optimization for nonlinear unconstrained problems. The method selectively uses function and gradient evaluations at different floating-point precisions to reduce the overall energy consumption, storage, and communication costs; these capabilities are increasingly important in the era of exascale computing. In particular, we are motivated by a desire to improve computational efficiency for massive climate models. We apply our method to two examples: the CUTEst test set and a large-scale data assimilation problem to recover wind fields from radar returns. Although this paper is primarily a proof of concept, we show that if implemented on appropriate hardware, the use of mixed precision can significantly reduce the computational load compared with fixed-precision solvers.
    Comment: 14 pages, 2 figures, 2 tables
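    The toy sketch below (Python/NumPy) illustrates the precision-hierarchy idea from the abstract: evaluate the objective in a cheap floating-point precision and escalate to a costlier one only when the observed decrease becomes comparable to that precision's rounding error. For brevity it is grafted onto a plain finite-difference gradient step rather than a trust-region iteration, so it is not the TROPHY algorithm, and the thresholds and names are assumptions.

# A toy, hypothetical sketch of precision switching: stay in a cheap
# floating-point precision while progress exceeds its rounding-error level,
# and escalate to a more expensive precision otherwise.
import numpy as np

PRECISIONS = [np.float32, np.float64]          # cheap -> expensive
EPS = {np.float32: 1e-6, np.float64: 1e-15}    # rough rounding-error levels

def rosenbrock(x, dtype):
    x = np.asarray(x, dtype=dtype)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def descend_with_precision(x0, steps=200, lr=1e-3):
    x, level = np.asarray(x0, dtype=np.float64), 0
    f_old = rosenbrock(x, PRECISIONS[level])
    for _ in range(steps):
        dtype = PRECISIONS[level]
        step = np.sqrt(EPS[dtype])
        # Finite-difference gradient evaluated at the current precision.
        g = np.array([(rosenbrock(x + step * e, dtype) - rosenbrock(x - step * e, dtype))
                      / (2 * step) for e in np.eye(x.size)])
        x_new = x - lr * g
        f_new = rosenbrock(x_new, dtype)
        decrease = f_old - f_new
        # If the decrease is on the order of rounding error, escalate precision.
        if decrease < 10 * EPS[dtype] * max(1.0, abs(f_old)) and level + 1 < len(PRECISIONS):
            level += 1
            f_old = rosenbrock(x, PRECISIONS[level])
            continue
        if decrease > 0:
            x, f_old = x_new, f_new
    return x, f_old, PRECISIONS[level].__name__

print(descend_with_precision(np.zeros(4)))

    The design choice mirrored here is that the cheapest precision does most of the work early on, and higher precision is paid for only when lower-precision arithmetic can no longer certify progress.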